117 results for COMPUTER SCIENCE, THEORY

in Deakin Research Online - Australia


Relevance:

100.00%

Publisher:

Abstract:

This paper provides an analysis of student experiences of an approach that integrates the teaching of theory with data analysis. The argument supporting this approach is that theory is most effectively taught by using empirical data to generate and test propositions and hypotheses, thereby emphasising the dialectic relationship between theory and data through experiential learning. Bachelor of Commerce students in two second-year substantive organisational theory subjects were introduced to this method of learning at a large, multi-campus Australian university. In this paper, we present a model that posits a relationship between students' perceptions of their learning, the enjoyment of the experience and expected future outcomes. The results of our evaluation reveal that a majority of students:

• enjoyed this way of learning;
• believed that the exercise assisted their learning of substantive theory, computing applications and the nature of survey data; and
• felt that what they had learned could be applied elsewhere.

We argue that this approach has the potential to improve the way theory is taught by integrating theory, theory testing and theory development; moving away from teaching theory and analysis in discrete subjects; and introducing iterative experiences in substantive subjects.

Relevance:

100.00%

Publisher:

Abstract:

Indexing high-dimensional datasets has attracted extensive attention from many researchers in the last decade. Since R-tree-type index structures are known to suffer from the curse of dimensionality, Pyramid-tree-type index structures, which are based on the B-tree, have been proposed to break the curse of dimensionality. However, for high-dimensional data the number of pyramids is often insufficient to discriminate data points, and the effectiveness of such structures degrades dramatically as dimensionality increases. In this paper, we focus on one particular aspect of the curse of dimensionality: the surface of a hypercube in a high-dimensional space approaches 100% of the total hypercube volume as the number of dimensions approaches infinity. We propose a new indexing method based on this surface property. We prove that the Pyramid-tree technique is a special case of our method. The results of our experiments demonstrate the clear superiority of our method.
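
As a quick numerical illustration of this effect (not part of the paper itself), the sketch below computes the fraction of a unit hypercube's volume that lies within a thin shell of width eps of its boundary, which is 1 - (1 - 2*eps)^d; the shell fraction rapidly approaches 100% as d grows.

```python
# Fraction of a unit hypercube's volume within a thin shell of width eps
# of its surface, as a function of dimensionality d. The interior volume
# (1 - 2*eps)^d shrinks exponentially, so the shell fraction approaches
# 100% as d grows -- the effect the abstract describes.

def surface_shell_fraction(d: int, eps: float = 0.01) -> float:
    """Volume fraction of the unit hypercube within eps of its boundary."""
    return 1.0 - (1.0 - 2.0 * eps) ** d

for d in (2, 10, 100, 1000):
    print(f"d={d:5d}: {surface_shell_fraction(d):.4%} of volume near surface")
```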

Relevance:

100.00%

Publisher:

Abstract:

Recommendations based on off-line data processing have attracted increasing attention from both research communities and the IT industry. Recommendation techniques can be used to explore huge volumes of data, identify the items that users are likely to want, and translate research results into real-world applications. This paper surveys recent progress in research on recommendations based on off-line data processing, with emphasis on new techniques (such as context-based and temporal recommendation) and new features (such as serendipitous recommendation). Finally, we outline some existing challenges for future research.
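
For readers unfamiliar with off-line recommendation, the following is a minimal sketch of one classical technique in this family, item-based collaborative filtering with cosine similarity; the toy rating matrix and function names are illustrative assumptions, not taken from the survey.

```python
# Minimal sketch of one classical off-line recommendation technique:
# item-item similarities are pre-computed off-line, then used to score
# items the user has not rated yet.
import numpy as np

# rows = users, columns = items; 0 means "not rated"
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# Pre-compute item-item cosine similarities (the off-line step).
norms = np.linalg.norm(ratings, axis=0)
sim = (ratings.T @ ratings) / np.outer(norms, norms)

def predict(user: int, item: int) -> float:
    """Score an unrated item as a similarity-weighted average of the
    user's existing ratings."""
    rated = ratings[user] > 0
    weights = sim[item, rated]
    return float(weights @ ratings[user, rated] / weights.sum())

print(predict(0, 2))  # predicted rating of user 0 for item 2
```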

Relevance:

100.00%

Publisher:

Abstract:

Attribute-Based Encryption (ABE) is a promising cryptographic primitive that significantly enhances the versatility of access control mechanisms. Due to the high expressiveness of ABE policies, the computational complexities of ABE key-issuing and decryption are becoming prohibitively high. Although existing outsourced ABE solutions are able to offload some intensive computing tasks to a third party, the verifiability of results returned from the third party has yet to be addressed. To tackle this challenge, we propose a new secure outsourced ABE system which supports both secure outsourced key-issuing and decryption. Our new method offloads all access-policy and attribute-related operations in key-issuing and decryption to a Key Generation Service Provider (KGSP) and a Decryption Service Provider (DSP), respectively, leaving only a constant number of simple operations for the attribute authority and eligible users to perform locally. In addition, for the first time, we propose an outsourced ABE construction that provides checkability of the outsourced computation results in an efficient way. Extensive security and performance analysis shows that the proposed schemes are secure and practical. © 2013 IEEE.
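
The checkability idea can be illustrated with a deliberately simplified sketch: attach a commitment to the plaintext at encryption time, let the untrusted DSP perform the heavy decryption work, and verify its answer locally with one cheap hash. This toy uses a hash-based stand-in cipher and illustrates only the verification pattern, not the paper's pairing-based ABE construction.

```python
# Toy illustration of verifiable outsourced decryption: the user checks
# the DSP's candidate plaintext against a commitment made at encryption
# time. The "cipher" here is a hash-keystream stand-in, NOT real ABE.
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def encrypt(key: bytes, msg: bytes) -> dict:
    r = secrets.token_bytes(16)
    pad = hashlib.sha256(key).digest()[: len(msg)]  # toy keystream
    return {"ct": bytes(a ^ b for a, b in zip(msg, pad)),
            "commitment": h(msg, r), "r": r}

def dsp_decrypt(transform_key: bytes, ct: bytes) -> bytes:
    """Heavy work done by the untrusted DSP (toy stand-in)."""
    pad = hashlib.sha256(transform_key).digest()[: len(ct)]
    return bytes(a ^ b for a, b in zip(ct, pad))

key = secrets.token_bytes(32)
c = encrypt(key, b"secret message")
candidate = dsp_decrypt(key, c["ct"])               # outsourced step
assert h(candidate, c["r"]) == c["commitment"]      # cheap local check
print(candidate)
```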

Relevance:

100.00%

Publisher:

Abstract:

This paper initiates the study of two specific security threats to smart-card-based password authentication in distributed systems. Smart-card-based password authentication is one of the most commonly used security mechanisms for determining the identity of a remote client, who must hold a valid smart card and the corresponding password to carry out a successful authentication with the server. The authentication is usually integrated with a key establishment protocol, yielding smart-card-based password-authenticated key agreement. Using two recently proposed protocols as case studies, we demonstrate two new types of adversaries with a smart card: 1) adversaries with pre-computed data stored in the smart card, and 2) adversaries with different data (with respect to different time slots) stored in the smart card. These threats, though realistic in distributed systems, have never been studied in the literature. In addition to pointing out the vulnerabilities, we propose countermeasures to thwart the security threats and secure the protocols. © 2013 IEEE.
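
To make the first threat concrete, here is a hedged sketch (the verifier format and names are assumptions, not the case-study protocols): if data pre-stored on a card is derived deterministically from the password, an adversary who extracts the card's memory can test password guesses entirely off-line.

```python
# Why password-derived data stored on a smart card is dangerous: with a
# toy verifier H(salt || pw) extracted from the card, guesses can be
# checked without ever contacting the server.
import hashlib

def verifier(salt: bytes, pw: str) -> bytes:
    return hashlib.sha256(salt + pw.encode()).digest()

# Data an adversary extracted from a stolen or cloned card:
salt = b"card-serial-0001"
stored = verifier(salt, "sunshine")

# Off-line dictionary attack: no interaction with the server needed.
for guess in ["123456", "password", "sunshine", "qwerty"]:
    if verifier(salt, guess) == stored:
        print("password recovered:", guess)
        break
```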

Relevance:

100.00%

Publisher:

Abstract:

In this work we present the definition of a strong fuzzy subsethood measure as a unifying concept for the different notions of fuzzy subsethood that can be found in the literature. We analyze the relations of our new concept with the definitions by Kitainik [20], Young [26] and Sinha and Dougherty [23], and we prove that the most relevant properties of the latter are preserved. We also present several construction methods. © 2014 Old City Publishing, Inc.
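
To make the notion concrete, the sketch below implements one classical fuzzy subsethood measure, Kosko's S(A, B) = sum_i min(a_i, b_i) / sum_i a_i; the paper's strong subsethood measures generalise and constrain definitions of this kind.

```python
# Kosko's fuzzy subsethood measure on finite membership vectors:
# S(A, B) = 1 exactly when A <= B pointwise (for non-empty A).

def kosko_subsethood(A: list[float], B: list[float]) -> float:
    num = sum(min(a, b) for a, b in zip(A, B))
    den = sum(A)
    return num / den if den else 1.0  # empty set is a subset of anything

A = [0.2, 0.5, 0.1]
B = [0.4, 0.6, 0.3]
print(kosko_subsethood(A, B))  # 1.0: A is fully contained in B
print(kosko_subsethood(B, A))  # < 1.0: B only partially contained in A
```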

Relevance:

100.00%

Publisher:

Abstract:

Texture classification is one of the most important tasks in computer vision and has been extensively investigated over the last several decades. Previous texture classification methods mainly relied on template-matching-based classifiers such as Support Vector Machines and k-Nearest-Neighbours. Given enough training images, state-of-the-art texture classification methods can achieve very high classification accuracies on some benchmark databases. However, when the number of training images is limited, which usually happens in real-world applications because of the high cost of obtaining labelled data, the classification accuracies of those state-of-the-art methods deteriorate due to overfitting. In this paper we aim to develop a novel framework that can correctly classify textural images with only a small number of training images. Taking into account the repetition and sparsity properties of textures, we propose a sparse-representation-based multi-manifold analysis framework for texture classification from few training images. A set of new training samples is generated from each training image by a scale and spatial pyramid, and the training samples belonging to each class are then modelled by a manifold based on sparse representation. We learn a dictionary of sparse representation and a projection matrix for each class and classify the test images based on the projected reconstruction errors. The framework provides a more compact model than template-matching-based texture classification methods and mitigates the overfitting effect. Experimental results show that the proposed method achieves reasonably high generalization capability even with as few as 3 training images, and significantly outperforms state-of-the-art texture classification approaches on three benchmark datasets. © 2014 Elsevier B.V. All rights reserved.
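
The decision rule at the end of that pipeline, classification by per-class reconstruction error, can be sketched as follows; for simplicity this toy uses least-squares coding and random dictionaries rather than the learned sparse dictionaries and projection matrices of the paper.

```python
# Classify a sample by the class whose dictionary reconstructs it with
# the smallest residual -- the reconstruction-error decision rule.
import numpy as np

def classify(x: np.ndarray, class_dicts: dict[int, np.ndarray]) -> int:
    """Assign x to the class whose dictionary reconstructs it best."""
    errors = {}
    for label, D in class_dicts.items():            # D: (features, atoms)
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        errors[label] = np.linalg.norm(x - D @ coef)
    return min(errors, key=errors.get)

rng = np.random.default_rng(0)
# Two toy "texture" classes living near different subspaces.
D0 = rng.normal(size=(20, 5))
D1 = rng.normal(size=(20, 5))
x = D1 @ rng.normal(size=5) + 0.01 * rng.normal(size=20)  # from class 1
print(classify(x, {0: D0, 1: D1}))  # expected: 1
```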

Relevance:

100.00%

Publisher:

Abstract:

Biomedical time series clustering, which automatically groups a collection of time series according to their internal similarity, is important for medical record management and inspection tasks such as bio-signal archiving and retrieval. In this paper, a novel framework is proposed that automatically groups a set of unlabelled multichannel biomedical time series according to their internal structural similarity. Specifically, we treat a multichannel biomedical time series as a document and extract local segments from the time series as words. We extend a topic model, Hierarchical probabilistic Latent Semantic Analysis (H-pLSA), originally developed for visual motion analysis, to cluster a set of unlabelled multichannel time series. The H-pLSA models each channel of the multichannel time series using a local pLSA in the first layer. The topics learned in the local pLSAs are then fed to a global pLSA in the second layer to discover the categories of multichannel time series. Experiments on a dataset extracted from multichannel Electrocardiography (ECG) signals demonstrate that the proposed method performs better than previous state-of-the-art approaches and is relatively robust to variations of parameters, including the length of local segments and the dictionary size. Although the experimental evaluation used multichannel ECG signals in a biometric scenario, the proposed algorithm is a universal framework for clustering multichannel biomedical time series according to their structural similarity, which has many applications in biomedical time series management.
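
A rough sketch of the "series as document, segments as words" representation is given below; scikit-learn's LDA is used as a convenient stand-in for the paper's pLSA layers, and the signal is synthetic, so this illustrates only the pipeline shape, not the two-layer H-pLSA itself.

```python
# Sliding-window segments -> codebook of "words" -> bag-of-words per
# "document" -> topic model. LDA stands in for pLSA here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 40 * np.pi, 4000)) + 0.1 * rng.normal(size=4000)

# 1. Extract local segments (the "words") with a sliding window.
win, step = 32, 8
segments = np.stack([signal[i:i + win]
                     for i in range(0, len(signal) - win, step)])

# 2. Quantise segments into a dictionary of K codewords.
K = 16
codes = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(segments)

# 3. Bag-of-words histogram per "document" (here: chunks of the series).
docs = np.stack([np.bincount(chunk, minlength=K)
                 for chunk in np.array_split(codes, 10)])

# 4. Topic model over the bag-of-words representation.
topics = LatentDirichletAllocation(n_components=3,
                                   random_state=0).fit_transform(docs)
print(topics.shape)  # (10 documents, 3 topic proportions)
```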

Relevance:

100.00%

Publisher:

Abstract:

Several grouping-proof protocols for RFID systems have been proposed over the years, but they have either been found vulnerable to certain attacks or do not comply with the EPC class-1 gen-2 (C1G2) standard because they use hash functions or other complex encryption schemes. Among other requirements, synchronization of keys, simultaneity, dependence, detection of illegitimate tags, elimination of unwanted tag processing, and denial-of-proof attacks have not been fully addressed by many of these protocols. Our protocol addresses these important gaps by taking a holistic approach to grouping proofs, and it provides forward security, which is an open research issue. The protocol is based on simple XOR encryption and 128-bit pseudorandom number generators, operations that can easily be implemented on low-cost passive tags. Thus, our protocol enables large-scale implementations and achieves EPC C1G2 compliance while meeting the security requirements.
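
A minimal sketch of the tag-side primitives named above, XOR encryption keyed by a 128-bit pseudorandom output, is shown below; a keyed hash stands in for the tag's on-chip PRNG, and the grouping-proof message flow itself is not reproduced.

```python
# XOR "encryption" with a keystream from a keyed 128-bit pseudorandom
# function, the kind of lightweight operation a passive tag can afford.
import hashlib
import secrets

def prng128(key: bytes, nonce: bytes) -> bytes:
    """128-bit pseudorandom output from key and per-round nonce (stand-in)."""
    return hashlib.sha256(key + nonce).digest()[:16]

def xor_encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    """XOR the 16-byte message with the keystream; XOR again to decrypt."""
    stream = prng128(key, nonce)
    return bytes(a ^ b for a, b in zip(msg, stream))

tag_key = secrets.token_bytes(16)
reader_nonce = secrets.token_bytes(16)
tag_id = b"EPC-TAG-00000001"                             # 16 bytes

ct = xor_encrypt(tag_key, reader_nonce, tag_id)          # tag's masked reply
assert xor_encrypt(tag_key, reader_nonce, ct) == tag_id  # verifier unmasks
```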